Monitoring Algorithmic Fairness

Authors

Abstract

Machine-learned systems are in widespread use for making decisions about humans, and it is important that they are fair, i.e., not biased against individuals based on sensitive attributes. We present runtime verification of algorithmic fairness for systems whose models are unknown, but are assumed to have a Markov chain structure. We introduce a specification language that can model many common algorithmic fairness properties, such as demographic parity, equal opportunity, and social burden. We build monitors that observe a long sequence of events generated by a given system and output, after each observation, a quantitative estimate of how fair or biased the system was on the run until that point in time. The estimate is proven to be correct modulo a variable error bound and a given confidence level, where the error bound gets tighter as the observed sequence gets longer. Our monitors are of two types and use, respectively, frequentist and Bayesian statistical inference techniques. While the frequentist monitors compute estimates that are objectively correct with respect to the ground truth, the Bayesian monitors compute estimates that are correct subject to a prior belief about the system's model. Using a prototype implementation, we show how to monitor whether a bank is fair in giving loans to applicants from different backgrounds, and whether a college is fair in admitting students while maintaining a reasonable financial burden on society. Although the two monitors exhibit different theoretical complexities in certain cases, in our experiments both took less than a millisecond to update their verdicts after each observation.
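As a concrete illustration of the monitoring idea described in the abstract, the sketch below shows a minimal frequentist monitor for one property, demographic parity: it consumes a stream of (group, decision) events and, after each observation, reports a point estimate of the acceptance-rate gap together with an error bound that tightens as the run gets longer. The class name, event format, and the Hoeffding-style bound are illustrative assumptions, not the paper's construction, which handles richer properties over systems modeled as unknown Markov chains.

import math

class DemographicParityMonitor:
    """Sketch of a frequentist runtime monitor for demographic parity (assumed design)."""

    def __init__(self, delta: float = 0.05):
        self.delta = delta                  # overall confidence level is 1 - delta
        self.accepted = {"A": 0, "B": 0}    # accepted applicants per group
        self.seen = {"A": 0, "B": 0}        # observed applicants per group

    def observe(self, group: str, accepted: bool):
        """Process one event: an applicant's group and the system's decision."""
        self.seen[group] += 1
        self.accepted[group] += int(accepted)

    def estimate(self):
        """Return (point estimate, error bound) for |P(accept | A) - P(accept | B)|."""
        if min(self.seen.values()) == 0:
            return None, float("inf")
        rate = {g: self.accepted[g] / self.seen[g] for g in self.seen}
        # Hoeffding bound for each group's acceptance rate, combined by a union
        # bound, so the interval shrinks as more events are observed.
        eps = sum(math.sqrt(math.log(4 / self.delta) / (2 * self.seen[g]))
                  for g in self.seen)
        return abs(rate["A"] - rate["B"]), eps

# Usage: feed a stream of (group, decision) events and read off the running verdict.
monitor = DemographicParityMonitor(delta=0.05)
for group, decision in [("A", True), ("B", False), ("A", True), ("B", True)]:
    monitor.observe(group, decision)
print(monitor.estimate())

Each update is constant time, which is consistent with the abstract's observation that verdicts can be refreshed in well under a millisecond per event; the Bayesian variant described in the paper would instead maintain a posterior over the unknown model parameters.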


Similar Articles

On Fairness, Diversity and Randomness in Algorithmic Decision Making

Consider a binary decision making process where a single machine learning classifier replaces a multitude of humans. We raise questions about the resulting loss of diversity in the decision making process. We study the potential benefits of using random classifier ensembles instead of a single classifier in the context of fairness-aware learning and demonstrate various attractive properties: (i...


Demographics and discussion influence views on algorithmic fairness

The field of algorithmic fairness has highlighted ethical questions which may not have purely technical answers. For example, different algorithmic fairness constraints are often impossible to satisfy simultaneously, and choosing between them requires value judgments about which people may disagree. Achieving consensus on algorithmic fairness will be difficult unless we understand why people di...


Network Monitoring on Multicores with Algorithmic Skeletons

Monitoring network traffic on 10 Gbit networks requires very efficient tools capable of exploiting modern multicore computing architectures. Specialized network cards can accelerate packet capture and thus reduce the processing overhead, but they cannot achieve adequate packet analysis performance. For this reason most monitoring tools cannot cope with high network speeds. We describe the desi...


Statistical Fit and Algorithmic Fairness in Risk Adjustment for Health Policy

While risk adjustment is pervasive in the health care system, relatively little attention has been devoted to studying the fairness of these formulas for individuals who may be harmed by them. In practice, risk adjustment algorithms are often built with respect to statistical fit, as measured by p-values or R2 statistics. The main goal of a health plan payment risk adjustment system is to conve...


Fairness and Accountability Design Needs for Algorithmic Support in High-Stakes Public Sector Decision-Making

Calls for heightened consideration of fairness and accountability in algorithmically-informed public decisions—like taxation, justice, and child protection—are now commonplace. How might designers support such human values? We interviewed 27 public sector machine learning practitioners across 5 OECD countries regarding challenges understanding and imbuing public values into their work. The resu...



Journal

Journal title: Lecture Notes in Computer Science

Year: 2023

ISSN: 1611-3349, 0302-9743

DOI: https://doi.org/10.1007/978-3-031-37703-7_17